- Indian Journal of Innovations and Developments
- Indian Journal of Education and Information Management
- Indian Journal of Advances in Computer Sciences and Technology
- ScieXplore: International Journal of Research in Science
- Programmable Device Circuits and Systems
- Digital Image Processing
- International Journal of Knowledge Based Computer System
Kuppusamy, K.
- Digital Image Compression using Sparse Matrix
Authors
1 Department of Computer Science, Alagappa Govt. Arts College, Karaikudi, IN
2 Department of Computer Science and Engineering, Alagappa University, Karaikudi, IN
Source
Indian Journal of Innovations and Developments, Vol 1, No 10 (2012), Pagination: 755-757
Abstract
Multimedia applications are widely adopted across all fields and in day-to-day activities, and multimedia files are increasingly stored on tiny devices with minimal memory. The storage processes and their functionalities differ from one another, but the logical processes are the same. File formats differ according to the storage technique used, so resource availability and utilization become a challenging task for the multimedia user. Various research works have been carried out in the field of file-size minimization, especially for images and video. Most file-format presentation and minimization work has been based on wavelet algorithms; researchers have also taken a mathematical approach in which the digital image is converted into a sparse matrix. Video compression, however, has not yet reached the level researchers would like, and many avenues remain for reducing frame images in terms of size, storage approach and retrieval process. This work attempts to reduce image storage size through a mathematical reduction tool, the sparse matrix. In the sparse matrix reduction approach, the video is converted into frames according to the scaling standards, and the frames are converted from the original file format into a three-dimensional, layer-based mathematical set. Each frame sequence is named and generated, and then compared with a transactional mapping set: in this binary set, one represents a difference in pixel value and zero represents equivalent pixels. From the transactional matrix, the sparse matrix is generated and compared with the converted three-dimensional matrix. Combining this sparse matrix with the first conventional sparse matrix generates the sequence of the next frame.
The overall numerical representation of the image and its size after decompression are compared with those of the original image.
Keywords
Video Compression, Sparse Matrix, Image Compression
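The frame-differencing idea described in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation: frames are plain 2-D integer arrays, and the "transactional" bits (1 = pixel changed, 0 = pixel unchanged) determine which pixels are kept as sparse (row, column, value) triples.

```python
# Minimal sketch of inter-frame sparse-difference compression.
# A transactional binary matrix marks changed pixels (1) vs. unchanged (0);
# only the changed pixels are stored as sparse (row, col, value) triples.

def frame_to_sparse(prev_frame, next_frame):
    """Return sparse triples for pixels that differ between two frames."""
    triples = []
    for r, (prev_row, next_row) in enumerate(zip(prev_frame, next_frame)):
        for c, (p, n) in enumerate(zip(prev_row, next_row)):
            if p != n:          # transactional bit = 1: pixel changed
                triples.append((r, c, n))
    return triples

def sparse_to_frame(prev_frame, triples):
    """Rebuild the next frame from the previous frame plus sparse triples."""
    frame = [row[:] for row in prev_frame]
    for r, c, value in triples:
        frame[r][c] = value
    return frame

frame0 = [[10, 10], [20, 20]]
frame1 = [[10, 99], [20, 20]]
delta = frame_to_sparse(frame0, frame1)          # only one pixel changed
assert sparse_to_frame(frame0, delta) == frame1  # lossless reconstruction
```

When few pixels change between consecutive frames, the triple list is much smaller than the full frame, which is the storage saving the abstract describes.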
- Digital Image Compression using Sparse Matrix
Authors
1 Department of Computer Science, Alagappa Govt. Arts College, Karaikudi-3, Tamil Nadu, IN
2 Department of Computer Science and Engineering, Alagappa University, Karaikudi-3, Tamil Nadu, IN
Source
Indian Journal of Education and Information Management, Vol 1, No 5 (2012), Pagination: 182-184
Abstract
Multimedia applications are widely adopted across all fields and in day-to-day activities, and multimedia files are increasingly stored on tiny devices with minimal memory. The storage processes and their functionalities differ from one another, but the logical processes are the same. File formats differ according to the storage technique used, so resource availability and utilization become a challenging task for the multimedia user. Various research works have been carried out in the field of file-size minimization, especially for images and video. Most file-format presentation and minimization work has been based on wavelet algorithms; researchers have also taken a mathematical approach in which the digital image is converted into a sparse matrix. Video compression, however, has not yet reached the level researchers would like, and many avenues remain for reducing frame images in terms of size, storage approach and retrieval process. This work attempts to reduce image storage size through a mathematical reduction tool, the sparse matrix. In the sparse matrix reduction approach, the video is converted into frames according to the scaling standards, and the frames are converted from the original file format into a three-dimensional, layer-based mathematical set. Each frame sequence is named and generated, and then compared with a transactional mapping set: in this binary set, one represents a difference in pixel value and zero represents equivalent pixels. From the transactional matrix, the sparse matrix is generated and compared with the converted three-dimensional matrix. Combining this sparse matrix with the first conventional sparse matrix generates the sequence of the next frame.
The overall numerical representation of the image and its size after decompression are compared with those of the original image.
Keywords
Video Compression, Sparse Matrix, Image Compression
- Finding Unknown Malware Object
Authors
1 Alagappa University, Karaikudi, Tamilnadu, IN
2 Department of Computer Science and Engineering Alagappa University, Karaikudi, Tamilnadu, IN
Source
Indian Journal of Advances in Computer Sciences and Technology, Vol 1, No 1 (2013), Pagination: 25-36
Abstract
This paper, "Finding Unknown Malware Object", describes the state's overall requirements regarding the acquisition and implementation of intrusion prevention and detection systems with intelligence (IIPS/IIDS). It is designed to give those who may be responsible for acquiring, implementing or monitoring such systems a deeper understanding of intrusion prevention and detection principles and of the technology and strategies available.
Keywords
Intelligence Intrusion Detection Prevention System, IIDPS, Unknown Malware
- Preparation and Characterization of Some New Hydrazinium Carboxylates
Authors
1 Department of Chemistry, Kongunadu Arts and Science College, Coimbatore – 641 029, IN
Source
ScieXplore: International Journal of Research in Science, Vol 2, No 1 (2015), Pagination: 13-18
Abstract
Hydrazinium salts of aromatic carboxylic acids were prepared by neutralization of the acid with hydrazine hydrate and characterized by analytical, IR spectral and TG-DTA analysis. All the compounds undergo decomposition, yielding carbon residue as the end product. The in vitro antibacterial activity of 2,4-dichlorophenoxyacetic acid and its hydrazinium salt against Escherichia coli has been investigated, and the results show that the as-prepared hydrazinium salts have better antibacterial activity than the free acid.
Keywords
2,4-Dichlorophenoxyacetic Acid, Antibacterial Activity, Aromatic Carboxylic Acids, Hydrazinium Salt, IR Spectral, TG-DTA.
- Genetic Algorithm to Optimize Test Cases for Simple Digital Circuits
Authors
1 Department of Computer Science and Engineering, Alagappa University, Karaikudi – 630 003, Tamilnadu, IN
2 Department of Computer Science and Engineering, Alagappa University, Karaikudi-630 003, Tamilnadu, IN
Source
Programmable Device Circuits and Systems, Vol 2, No 9 (2010), Pagination: 140-146
Abstract
Time is gold! This is a phrase put to use by almost everyone today: the less time spent doing something, the more money saved. That thought is at the back of every test engineer's mind. With the advances in science and technology, modern devices are becoming more complex every day, and as device complexity increases, testing becomes even more complex. Circuits are shrinking in physical size while growing both in speed and in range of capabilities. This rapid advancement is not without serious problems, however. Especially worrisome are verification and testing, which become more important as system complexity increases and time-to-market decreases. This results in increased test time and higher test cost, while the manufacturing cost of a device falls due to higher levels of integration. All this has made test cost an increasing fraction of total manufacturing cost; hence the necessity of reducing it. To decrease the test cost, the time required to test a device must be decreased, so we need to devise a test set that is small in size.
Many of the optimization problems in circuit design, layout, and test automation can be solved with high-quality results using genetic algorithms. Genetic algorithms have been very effective for circuit test generation, especially when combined with a deterministic algorithm. In this paper, a new genetic approach to minimizing test patterns for simple combinational circuits is presented. In the proposed work, evolutionary principles are employed in the test-minimization stage alone. Results show that test sets generated using the new approach are more compact for many circuits.
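The general technique of applying evolutionary principles to test-set minimization can be sketched as follows. This is an illustrative genetic algorithm, not the paper's exact method: the fault table, population size, mutation rate and fitness weighting are all hypothetical choices. Each chromosome is a bit-vector selecting a subset of candidate tests; fitness rewards full fault coverage and penalizes test-set size.

```python
import random

# Hypothetical fault table: fault name -> set of test indices detecting it.
FAULTS = {"f1": {0, 2}, "f2": {1}, "f3": {2, 3}, "f4": {0, 4}}
NUM_TESTS = 5

def fitness(chromosome):
    """Coverage dominates; each selected test costs one point."""
    selected = {i for i, bit in enumerate(chromosome) if bit}
    covered = sum(1 for tests in FAULTS.values() if tests & selected)
    return covered * NUM_TESTS - len(selected)

def evolve(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    # Seed the population with the full test set so coverage is never lost.
    pop = [[1] * NUM_TESTS]
    pop += [[rng.randint(0, 1) for _ in range(NUM_TESTS)]
            for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, NUM_TESTS)   # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # occasional bit-flip mutation
                child[rng.randrange(NUM_TESTS)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
selected = {i for i, bit in enumerate(best) if bit}
assert all(tests & selected for tests in FAULTS.values())  # all faults covered
```

Because the best chromosome is always kept (elitism) and full coverage outweighs any size saving in the fitness function, the evolved test set still detects every fault while the size penalty drives it toward compactness.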
Keywords
Boolean Expression, Combinational Circuits, Heuristic Method, Fitness Function, Genetic Algorithm, Test Minimization.
- An Easy Method for Clipping Regular/Irregular 2D Polygon
Authors
1 Department of Computer Science, Thiagarajar College, Madurai-625009, TN, IN
2 Department of Computer Science and Engineering, Alagappa University, Karaikudi, TN, IN
Source
Digital Image Processing, Vol 3, No 9 (2011), Pagination: 519-523
Abstract
The paper presents a new, simple clipping algorithm for 2D polygons against rectangular windows. The polygon clipping process often involves many intersection calculations and comparisons. This paper proposes a 2D polygon clipping method that performs fewer edge and clipping-region comparisons and far fewer intersection calculations than traditional polygon clipping algorithms. The proposed algorithm is explained step by step and compared with the method proposed by Patrick-Gilles Maillot and with the Sutherland-Hodgman algorithm, showing far fewer edge and clipping-region comparisons. The experimental results strongly support the superiority of the proposed algorithm in terms of comparisons, and it is theoretically and experimentally better than both the conventional method and the Patrick-Gilles Maillot method.
Keywords
Clipping Region, Clipped Point, Gentle Slope, Polygon Clipping, Sharp Slope.
- Object Oriented Video Segmentation Using Converged Mean Shift Algorithm
Authors
1 Department of Computer Science and Engineering, Alagappa University, Karaikudi, TN, IN
Source
Digital Image Processing, Vol 3, No 9 (2011), Pagination: 544-550
Abstract
The survey describes an approach to object-oriented video segmentation based on motion coherence. Using a tracking process, 2-D motion patterns are identified with an ensemble clustering approach, and particles are clustered to obtain a pixel-wise segmentation in the space and time domains. The mean-shift algorithm used for segmentation has a drawback: it requires a fixed bandwidth, reducing the magnitude and variety of detected motion patterns. To overcome this problem, we propose the use of the infinity norm, which ensures that each evaluated pixel lies inside the neighborhood and avoids unnecessary calculations.
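The infinity-norm neighborhood test mentioned above can be sketched as follows. This is a generic illustration, not the paper's implementation: under the infinity (Chebyshev) norm a point lies inside the window iff every coordinate differs from the center by at most the bandwidth, so membership needs no squares or square roots.

```python
def in_window(point, center, bandwidth):
    """Infinity-norm membership: max coordinate distance <= bandwidth."""
    return all(abs(p - c) <= bandwidth for p, c in zip(point, center))

def mean_shift_step(points, center, bandwidth):
    """One flat-kernel mean-shift step over the infinity-norm window.

    Assumes at least one point falls inside the window.
    """
    inside = [p for p in points if in_window(p, center, bandwidth)]
    dim = len(center)
    return tuple(sum(p[i] for p in inside) / len(inside) for i in range(dim))

pts = [(0, 0), (1, 1), (2, 0), (9, 9)]
assert in_window((1, 1), (0, 0), 1)        # inside the square window
assert not in_window((2, 0), (0, 0), 1)    # outside: x differs by 2
new_center = mean_shift_step(pts, (0, 0), 2)   # mean of (0,0), (1,1), (2,0)
```

Iterating `mean_shift_step` until the center stops moving gives the converged mode; the per-point cost is a handful of comparisons rather than a Euclidean distance computation.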
Keywords
Ensemble Clustering, Motion Segmentation, Object-Based Video Segmentation, Point Tracking, Video Coding.
- A New Fast Clipping Algorithm for 2D-Polygon against Rectangular Windows
Authors
1 Department of Computer Science, Thiagarajar College, Madurai – 625 009, TN, IN
2 Department of Computer Science and Engineering, Alagappa University, Karaikudi, TN, IN
Source
Digital Image Processing, Vol 3, No 1 (2011), Pagination: 63-67
Abstract
The polygon clipping process often involves many intersection calculations and comparisons. One way to improve the efficiency of a polygon clipping algorithm is to avoid the unnecessary intersection calculations demanded by traditional algorithms by rejecting outright the edges that lie outside the window. This paper presents a new 2D polygon clipping method, based on an extension of the Sutherland-Hodgman polygon clipping method, in which efficiency is improved either by rejecting edges that lie outside a boundary or by avoiding comparisons of clipping boundaries against polygon sides that neither totally nor partially cross the polygon. The proposed algorithm remembers neither so-called entry/exit intersection points nor union points. After discussing two basic polygon clipping algorithms, a different approach is proposed, its principles explained and presented step by step. An example implementation of the algorithm is given along with some results. A comparison between the proposed method, the Patrick-Gilles Maillot polygon clipping algorithm and the Sutherland-Hodgman algorithm is also given, showing far fewer edge and clipping-region boundary comparisons than either.
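For reference, the Sutherland-Hodgman baseline that the comparison above mentions can be sketched as follows. This is a generic textbook version clipping against an axis-aligned rectangular window, not the new algorithm proposed in the paper.

```python
# Standard Sutherland-Hodgman clipping of a polygon against an axis-aligned
# rectangular window [xmin, xmax] x [ymin, ymax].

def clip_polygon(polygon, xmin, ymin, xmax, ymax):
    def inside(p, edge):
        x, y = p
        return {"L": x >= xmin, "R": x <= xmax,
                "B": y >= ymin, "T": y <= ymax}[edge]

    def intersect(p, q, edge):
        # Called only when p and q straddle the edge, so no zero division.
        (x1, y1), (x2, y2) = p, q
        if edge in ("L", "R"):
            x = xmin if edge == "L" else xmax
            t = (x - x1) / (x2 - x1)
            return (x, y1 + t * (y2 - y1))
        y = ymin if edge == "B" else ymax
        t = (y - y1) / (y2 - y1)
        return (x1 + t * (x2 - x1), y)

    output = list(polygon)
    for edge in "LRBT":                  # clip against each window side in turn
        if not output:
            break
        vertices, output = output, []
        for i, current in enumerate(vertices):
            prev = vertices[i - 1]
            if inside(current, edge):
                if not inside(prev, edge):
                    output.append(intersect(prev, current, edge))
                output.append(current)
            elif inside(prev, edge):
                output.append(intersect(prev, current, edge))
    return output

# A triangle poking out of the unit window is clipped back inside it:
tri = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5)]
clipped = clip_polygon(tri, 0, 0, 1, 1)
assert all(0 <= x <= 1 and 0 <= y <= 1 for x, y in clipped)
```

Note that every polygon edge is tested against every window side, which is exactly the comparison cost the paper's method sets out to reduce.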
Keywords
Clipping Region, Line Intersection, Polygon Clipping, Window.
- An Efficient Encryption Scheme for Color Images Using Transpose, Interweaving and Iteration Method
Authors
1 Dept. of Computer Science and Engineering, Alagappa University, Karaikudi-630003, Tamilnadu, IN
Source
Digital Image Processing, Vol 2, No 10 (2010), Pagination: 357-363
Abstract
Many digital services require reliable security for the storage and transmission of multimedia content. Due to the rapid growth of the internet in today's digital world, the security of multimedia content such as digital images has become more important and has attracted much attention. Encryption techniques give protection against illegal duplication and manipulation of multimedia content. The transposition or permutation of characters in the plaintext is responsible for confusion, and the influence of each bit of the key on each plaintext bit causes diffusion. The goal of this paper is to create a cryptosystem that encrypts color images by shuffling pixel values using a transpose, interweaving and iteration method on each block. In this method, the image is divided into blocks of the required size, and the pixel values of the transposed, modified plaintext are rearranged by transposing the binary bits of neighboring rows and columns in each iteration. The decimal values of the modified, interweaved matrix are multiplied by a key matrix to yield the cipher text of the image. This method gives a strong cipher of the image, with a significantly large key length.
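The transpose / interweave / key-multiply pipeline described above can be sketched in simplified form. This is an assumption-laden illustration, not the paper's scheme: the block size, the bit-interweaving rule and the scalar key are all hypothetical choices (the paper multiplies by a key matrix; a single odd scalar modulo 256 is used here so the modular inverse is trivial).

```python
KEY = 77  # hypothetical key; odd, hence invertible modulo 256

def transpose(block):
    """Transpose a square block of pixel values."""
    return [list(row) for row in zip(*block)]

def interweave(block):
    """Swap bit 0 of each pixel with bit 0 of its neighbor in the next row.

    Simplified stand-in for the paper's bit rearrangement; it is its own
    inverse, which keeps decryption easy.
    """
    out = [row[:] for row in block]
    for r in range(0, len(out) - 1, 2):
        for c in range(len(out[0])):
            a, b = out[r][c], out[r + 1][c]
            out[r][c] = (a & ~1) | (b & 1)
            out[r + 1][c] = (b & ~1) | (a & 1)
    return out

def encrypt_block(block, key=KEY):
    step = interweave(transpose(block))
    return [[(p * key) % 256 for p in row] for row in step]

def decrypt_block(cipher, key=KEY):
    inv = pow(key, -1, 256)              # modular inverse (Python 3.8+)
    step = [[(p * inv) % 256 for p in row] for row in cipher]
    return transpose(interweave(step))   # undo in reverse order

plain = [[12, 34], [56, 78]]
assert decrypt_block(encrypt_block(plain)) == plain
```

The essential property demonstrated is that every stage (transpose, interweave, modular multiply) is invertible, so the receiver holding the key can recover the exact pixel values.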
Keywords
Image Encryption, Interweaving, Inverse Interweaving, Modular Arithmetic Inverse.
- An Enhanced Method for Filling a 2D-Polygon
Authors
1 Department of Computer Science in Thiagarajar College, Madurai, IN
2 Department of Computer Science and Engineering in Alagappa University, Karaikudi, IN
Source
Digital Image Processing, Vol 2, No 11 (2010), Pagination: 486-489
Abstract
Polygon filling is a fundamental operation in computer graphics and image processing. Conventional polygon filling algorithms normally adopt a scan-line-at-a-time approach; these scan-line algorithms typically use many procedures and table (record and field) data structures. With these procedures and data structures, the filling process slows down, because the many procedures involved perform sorting and reordering of records and fields in a table, making the algorithms time-consuming, tedious and very complex. This paper presents an enhanced 2D polygon filling method based on the scan-line algorithm. After a brief discussion of the scan-line filling algorithm and its procedures, a different approach is proposed and explained step by step.
The proposed method, the KN algorithm, is fast, uses a simple data structure, requires less execution time and data-reordering work, and is independent of the polygon geometry. The experimental results strongly support the superiority of the proposed algorithm in execution time; it is theoretically and experimentally better than the conventional algorithm.
Keywords
Concave Polygon, End Points, Polygon Filling, Vertex.
- A New Method for Filling a 2D-Polygon
Authors
1 Department of Computer Science in Thiagarajar College, Madurai, IN
2 Department of Computer Science and Engineering in Alagappa University, Karaikudi, IN
Source
Digital Image Processing, Vol 2, No 11 (2010), Pagination: 490-495
Abstract
The polygon filling process often involves many procedures that call one another repeatedly, performing many calculations and data-reordering tasks. One way to improve the efficiency of a polygon filling algorithm is to avoid the unnecessary calculations and reordering demanded by traditional algorithms, either by rejecting some reordering work or by avoiding some sorting and calculation work. An adaptive filling algorithm is presented here to achieve this goal. The filling process of our new algorithm, the NH algorithm, consists of five steps. First, we read the edges of the polygon, avoiding the sorting of sides normally required by a traditional algorithm. Second, we calculate all boundary points for each side of the polygon between its maximum and minimum y values. Third, we find all boundary points for each scan level. Fourth, we sort the boundary points in order of their minimum x values. Finally, even-numbered pairs of boundary points are selected and filled. The data-reordering analysis and experimental statistics comparing the two algorithms demonstrate the high efficiency of our new algorithm.
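The five steps above can be sketched as a generic scan-line fill. This is an illustration of the general technique only; the edge-walking and pairing details are assumptions, not the NH algorithm's exact procedures.

```python
def fill_polygon(vertices):
    """Return the set of (x, y) pixels filled inside an integer polygon."""
    # Steps 1-2: for every edge, collect one boundary x per scan line y.
    boundaries = {}                       # y -> list of x crossings
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if y1 == y2:
            continue                      # horizontal edges add no crossings
        if y1 > y2:
            (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
        for y in range(y1, y2):           # half-open rule avoids double counts
            x = x1 + (x2 - x1) * (y - y1) / (y2 - y1)
            boundaries.setdefault(y, []).append(x)
    # Steps 3-5: per scan level, sort crossings and fill between even pairs.
    filled = set()
    for y, xs in boundaries.items():
        xs.sort()
        for left, right in zip(xs[0::2], xs[1::2]):
            for x in range(int(left), int(right) + 1):
                filled.add((x, y))
    return filled

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
pixels = fill_polygon(square)
assert (2, 2) in pixels and (5, 2) not in pixels
```

Pairing sorted crossings (step five) is what makes the method work for concave polygons too: a scan line crossing a concavity gets four crossings and is filled only between the first-second and third-fourth pairs.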
Keywords
Boundary Points, Concave Polygon, Polygon Filling, Vertex.
- An Entropy Encoding Method for Routing Metadata in Ad Hoc Network
Authors
1 Department of Computer Applications, Alagappa University, Karaikudi, Tamil Nadu, IN
2 Department of Computer Science, Alagappa University, Karaikudi, Tamil Nadu, IN
Source
International Journal of Knowledge Based Computer System, Vol 5, No 1 (2017), Pagination: 5-7
Abstract
Secret communication among two or more nodes in a network lies at the root of common communication security. In the existing system, the DSR (Dynamic Source Routing) algorithm is used when a user (sender) decides to send a packet to a destination node (receiver) in the ad hoc network. A key-wrapping technique is used to encode the IP address, path and data so that a third-party hacker cannot obtain the information from a node; however, maximum entropy occurs. The aim of this research work is to reduce the entropy of the transmission process by identifying the active nodes through which files/data are to be transmitted. The proposed method uses a selective reference algorithm to identify active nodes and the key-wrapping technique to encode the IP addresses of the selected nodes and find the path/route for transferring the file/data. The proposed work reduces the entropy of the process; the router path is more secure and provides high performance of the ad hoc network during transmission.
Keywords
Encoding, Entropy, Metadata, Routing.